10 research outputs found

    Modelling 3D humans: pose, shape, clothing and interactions

    Digital humans are increasingly becoming part of our lives, with applications in animation, gaming, virtual try-on, the Metaverse and more. In recent years there has been a great push to make models of digital humans as realistic as possible. In this thesis we present methodologies to model two key characteristics of real humans: their appearance and their actions. The thesis covers four innovations.

    (i) We propose MGN, the first approach to reconstruct 3D garments and the body shape underneath, as separate meshes, from a few RGB images of a person. We extend the widely used SMPL body model, which represents only unclothed shapes, to also capture garments (SMPL+G). SMPL+G can be dressed with garments that can be posed and shaped according to the SMPL model. This enables, for the first time, real-world applications such as texture transfer, garment transfer and virtual try-on in 3D using just images.

    (ii) We also highlight a crucial limitation of mesh-based representations of digital humans: their limited ability to capture high-frequency detail. We therefore investigate implicit function-based representations as an alternative to mesh-based ones (including parametric models such as SMPL). Methods built on the latter typically lack detail, while those built on the former lack control. We propose IPNet, a neural network that leverages implicit functions for detailed reconstruction and registers the reconstructed mesh to the parametric SMPL model, making it controllable for real-world tasks such as animation and editing and thus getting the best of both worlds.

    (iii) We study the process of registering a parametric model such as SMPL to a 3D mesh. This decades-old problem in computer vision and graphics typically requires a two-step process: first, establish correspondences between the model and the mesh; second, optimize the model to minimize the distance between corresponding points. This two-step process is not end-to-end differentiable. We propose LoopReg, a novel formulation that uses an implicit function-based representation of the model and makes 3D registration end-to-end differentiable for the first time. Semi-supervised LoopReg outperforms contemporary supervised methods using ∼100x less supervised data.

    (iv) Modelling human appearance is necessary but not sufficient for realistic digital humans: we must model not only how people look but also how they interact with the objects around them. To this end we present BEHAVE, the first dataset and method to track full-body, real-world interactions between humans and movable objects. We provide segmented multi-view RGB-D frames together with registered SMPL and object fits as well as contact annotations in 3D. The BEHAVE dataset contains ∼15k frames, and its extension contains ∼400k frames with pseudo-ground-truth annotations. Our BEHAVE method uses this dataset to train a neural network that jointly tracks the person, the object and the contacts between them.

    Throughout the thesis we analyse our key ideas and design decisions in depth, discuss their limitations, and propose future work both to address these limitations and to extend the research further. All our code, the MGN digital wardrobe and the BEHAVE dataset are publicly available for further research.
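    The two-step registration pipeline described above (establish correspondences, then minimize point-to-point distance) can be illustrated with its classical, non-differentiable instance: ICP-style rigid alignment. The toy below uses synthetic point clouds and a closed-form Kabsch solve; it is a sketch of the pipeline LoopReg replaces, not the LoopReg implementation, which handles non-rigid parametric models.

```python
import numpy as np

def nearest_correspondences(src, tgt):
    # Step (i): for each source point, pick the closest target point.
    # The hard argmin here is what breaks end-to-end differentiability.
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    return tgt[d2.argmin(axis=1)]

def best_rigid_transform(src, tgt):
    # Step (ii): closed-form least-squares rotation/translation (Kabsch).
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    H = (src - mu_s).T @ (tgt - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # enforce det(R) = +1
    t = mu_t - R @ mu_s
    return R, t

def icp(src, tgt, iters=30):
    # Alternate the two steps until the clouds align.
    cur = src.copy()
    for _ in range(iters):
        corr = nearest_correspondences(cur, tgt)
        R, t = best_rigid_transform(cur, corr)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
tgt = rng.normal(size=(200, 3))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = tgt @ R_true.T + np.array([0.05, -0.05, 0.02])
aligned = icp(src, tgt)
print(np.abs(aligned - tgt).max())  # small residual after alignment
```

    Replacing the discrete correspondence step with a smooth, implicit mapping is the key move that lets the whole loop be trained with gradients.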

    CHORE: Contact, Human and Object REconstruction from a single RGB image

    While most works in computer vision and learning have focused on perceiving 3D humans from single images in isolation, in this work we focus on capturing 3D humans interacting with objects. The problem is extremely challenging due to heavy occlusions between human and object, diverse interaction types and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct human and object from a single image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of human and object represented implicitly with two unsigned distance fields, and additionally predict a correspondence field to a parametric body as well as an object pose field. This allows us to robustly fit a parametric body model and a 3D object template, while reasoning about interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met in real data. We propose a simple yet effective depth-aware scaling that allows more efficient shape learning on real data. Our experiments show that our joint reconstruction learned with the proposed strategy significantly outperforms the state of the art. Our code and models will be released to foster future research in this direction. Comment: 19 pages, 7 figures
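    An unsigned distance field (UDF), as used here to represent human and object, gives the distance to the nearest surface point without an inside/outside sign, and fitting a template amounts to driving its points toward the zero level set. The sketch below uses an analytic sphere UDF in place of a learned network, so it only illustrates the distance-minimization idea, not CHORE's actual fitting objective.

```python
import numpy as np

def sphere_udf(x, r=1.0):
    # Unsigned distance to a sphere of radius r centred at the origin.
    return np.abs(np.linalg.norm(x, axis=-1) - r)

def udf_grad(x, r=1.0, eps=1e-5):
    # Numerical gradient of the UDF; a learned field would provide
    # this via automatic differentiation instead.
    g = np.zeros_like(x)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = eps
        g[:, i] = (sphere_udf(x + dx, r) - sphere_udf(x - dx, r)) / (2 * eps)
    return g

def fit_points_to_surface(pts, steps=200, lr=0.1):
    # Gradient descent on 0.5 * udf^2 drives each template point onto
    # the zero level set, mimicking a distance term in model fitting.
    for _ in range(steps):
        pts = pts - lr * sphere_udf(pts)[:, None] * udf_grad(pts)
    return pts

rng = np.random.default_rng(1)
template = rng.normal(size=(50, 3)) * 2.0
fitted = fit_points_to_surface(template)
print(sphere_udf(fitted).max())  # all points end up near the surface
```

    In the real method the same gradients flow through predicted correspondence and pose fields, which is what makes the joint human-object fit robust.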

    Visibility Aware Human-Object Interaction Tracking from Single RGB Camera

    Capturing the interactions between humans and their environment in 3D is important for many applications in robotics, graphics, and vision. Recent works to reconstruct the 3D human and object from a single RGB image do not have consistent relative translation across frames because they assume a fixed depth. Moreover, their performance drops significantly when the object is occluded. In this work, we propose a novel method to track the 3D human, object, contacts between them, and their relative translation across frames from a single RGB camera, while being robust to heavy occlusions. Our method is built on two key insights. First, we condition our neural field reconstructions for human and object on per-frame SMPL model estimates obtained by pre-fitting SMPL to a video sequence. This improves neural reconstruction accuracy and produces coherent relative translation across frames. Second, human and object motion from visible frames provides valuable information to infer the occluded object. We propose a novel transformer-based neural network that explicitly uses object visibility and human motion to leverage neighbouring frames to make predictions for the occluded frames. Building on these insights, our method is able to track both human and object robustly even under occlusions. Experiments on two datasets show that our method significantly improves over the state-of-the-art methods. Our code and pretrained models are available at: https://virtualhumans.mpi-inf.mpg.de/VisTracker. Comment: accepted to CVPR 2023
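    The second insight, inferring the occluded object from neighbouring visible frames, is learned by a transformer in the paper; as a minimal hand-rolled stand-in, one can interpolate the object's per-frame translation between the nearest visible frames. The snippet below is an illustrative simplification, not the paper's network.

```python
import numpy as np

def fill_occluded(translations, visible):
    # translations: (T, 3) per-frame object translation estimates.
    # visible: (T,) boolean mask; occluded frames are inferred by
    # interpolating between the nearest visible neighbours in time.
    t = np.arange(len(translations))
    out = translations.copy()
    for d in range(3):
        out[~visible, d] = np.interp(t[~visible], t[visible],
                                     translations[visible, d])
    return out

# A toy linear trajectory with two occluded frames in the middle.
traj = np.stack([np.linspace(0.0, 1.0, 6)] * 3, axis=1)
vis = np.array([True, True, False, False, True, True])
noisy = traj.copy()
noisy[~vis] = 0.0  # occluded frames carry no usable estimate
recovered = fill_occluded(noisy, vis)
print(np.abs(recovered - traj).max())  # exact for a linear trajectory
```

    A learned model replaces this linear prior with motion cues from the human, which keeps the inference sensible when the object path is not smooth.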

    TOCH: Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement

    We present TOCH, a method for refining incorrect 3D hand-object interaction sequences using a data prior. Existing hand trackers, especially those that rely on very few cameras, often produce visually unrealistic results with hand-object intersection or missing contacts. Although correcting such errors requires reasoning about temporal aspects of interaction, most previous works focus on static grasps and contacts. The core of our method is TOCH fields, a novel spatio-temporal representation for modeling correspondences between hands and objects during interaction. TOCH fields are a point-wise, object-centric representation, which encodes the hand position relative to the object. Leveraging this novel representation, we learn a latent manifold of plausible TOCH fields with a temporal denoising auto-encoder. Experiments demonstrate that TOCH outperforms state-of-the-art 3D hand-object interaction models, which are limited to static grasps and contacts. More importantly, our method produces smooth interactions even before and after contact. Using a single trained TOCH model, we quantitatively and qualitatively demonstrate its usefulness for correcting erroneous sequences from off-the-shelf RGB/RGB-D hand-object reconstruction methods and transferring grasps across objects.
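    A point-wise, object-centric encoding of hand position means that each object point, rather than each hand point, carries the interaction information. The sketch below stores, per object point, a contact flag and the offset to the closest hand point; this is a deliberate simplification to convey the flavour of the representation, not the paper's actual TOCH field definition.

```python
import numpy as np

def object_centric_field(obj_pts, hand_pts, max_dist=0.1):
    # For each object point, record whether any hand point lies within
    # max_dist and, if so, the offset to the closest one. Offsets for
    # non-corresponding points are zeroed out.
    d2 = ((obj_pts[:, None, :] - hand_pts[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(obj_pts)), idx])
    corr = dist < max_dist
    offsets = np.where(corr[:, None], hand_pts[idx] - obj_pts, 0.0)
    return corr, offsets

# Two object points: one near a hand point, one far away.
obj = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
hand = np.array([[0.05, 0.0, 0.0]])
corr, off = object_centric_field(obj, hand)
print(corr)  # first object point is in contact range, second is not
```

    Anchoring the field to the object is what lets a single trained model transfer a grasp across objects: the same field, decoded on a new object, yields a corresponding hand pose.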

    Learning to Reconstruct People in Clothing from a Single RGB Camera

    We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.

    NSF: Neural Surface Fields for Human Modeling from Monocular Depth

    Obtaining personalized 3D animatable avatars from a monocular camera has several real-world applications in gaming, virtual try-on, animation, and VR/XR. However, it is very challenging to model dynamic and fine-grained clothing deformations from such sparse data. Existing methods for modeling 3D humans from depth data have limitations in terms of computational efficiency, mesh coherency, and flexibility in resolution and topology. For instance, reconstructing shapes using implicit functions and extracting explicit meshes per frame is computationally expensive and cannot ensure coherent meshes across frames. Moreover, predicting per-vertex deformations on a pre-designed human template with a discrete surface lacks flexibility in resolution and topology. To overcome these limitations, we propose a novel method, NSF: Neural Surface Fields, for modeling 3D clothed humans from monocular depth. NSF defines a neural field solely on the base surface which models a continuous and flexible displacement field. NSF can be adapted to a base surface with different resolution and topology without retraining at inference time. Compared to existing approaches, our method eliminates the expensive per-frame surface extraction while maintaining mesh coherency, and is capable of reconstructing meshes with arbitrary resolution without retraining. To foster research in this direction, we release our code on the project page at: https://yuxuan-xue.com/nsf. Comment: Accepted to ICCV 2023; homepage at: https://yuxuan-xue.com/nsf
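    The key property claimed above is that a displacement field defined continuously on the base surface can be evaluated at any sampling resolution without retraining. The sketch below substitutes an analytic bump for the learned neural field and a Fibonacci-sampled sphere for the body surface, so it is only an illustration of that resolution-independence, under those stand-in assumptions.

```python
import numpy as np

def displacement_field(pts):
    # A continuous displacement defined on the base surface: an analytic
    # bump along the outward normal of a unit sphere, concentrated near
    # the pole. In NSF this function would be a learned neural field.
    normals = pts / np.linalg.norm(pts, axis=1, keepdims=True)
    bump = 0.1 * np.exp(-((pts[:, 2] - 1.0) ** 2) / 0.1)
    return normals * bump[:, None]

def sphere_points(n):
    # Fibonacci sampling of the unit sphere at an arbitrary resolution.
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# The same continuous field "dresses" a coarse and a fine sampling of
# the base surface, with no per-resolution retraining.
coarse = sphere_points(100)
fine = sphere_points(10000)
dressed_coarse = coarse + displacement_field(coarse)
dressed_fine = fine + displacement_field(fine)
```

    Because the field lives on the surface rather than on a fixed vertex set, changing mesh resolution or topology only changes where the field is queried.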

    Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes

    Most learning methods for 3D data (point clouds, meshes) suffer significant performance drops when the data is not carefully aligned to a canonical orientation. Aligning real-world 3D data collected from different sources is non-trivial and requires manual intervention. In this paper, we propose the Adjoint Rigid Transform (ART) Network, a neural module which can be integrated with a variety of 3D networks to significantly boost their performance. ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks such as shape reconstruction, interpolation, non-rigid registration, and latent disentanglement. ART achieves this with self-supervision and a rotation equivariance constraint on predicted rotations. The remarkable result is that with only self-supervision, ART facilitates learning a unique canonical orientation for both rigid and non-rigid shapes, which leads to a notable boost in the performance of the aforementioned tasks. We will release our code and pre-trained models for further research.
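    A module that predicts rotations has to guarantee its output is actually a valid rotation matrix. A standard way to do this (an assumption here; ART's exact rotation parameterization may differ) is to project the network's unconstrained 3x3 output onto SO(3) via SVD, which yields the closest rotation in Frobenius norm.

```python
import numpy as np

def project_to_rotation(M):
    # Project an arbitrary 3x3 matrix to the nearest rotation matrix
    # (in Frobenius norm) via SVD, flipping the sign of the last
    # singular direction if needed so that det(R) = +1.
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

rng = np.random.default_rng(2)
raw = rng.normal(size=(3, 3))  # stand-in for an unconstrained network output
R = project_to_rotation(raw)
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```

    Applying R to every input shape before the downstream 3D network is the structural role a canonicalization module like ART plays in a larger pipeline.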